Stability of Multi-Task Kernel Regression Algorithms
Abstract
We study the stability properties of nonlinear multi-task regression in reproducing kernel Hilbert spaces with operator-valued kernels. Such kernels, a.k.a. multi-task kernels, are appropriate for learning problems with nonscalar outputs, such as multi-task learning and structured output prediction. We show that multi-task kernel regression algorithms are uniformly stable in the general case of infinite-dimensional output spaces. We then derive, under a mild assumption on the kernel, generalization bounds for such algorithms, and we show their consistency even with non-Hilbert-Schmidt operator-valued kernels. We demonstrate how to apply the results to various multi-task kernel regression methods such as vector-valued SVR and functional ridge regression.
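To make the setting concrete, here is a minimal sketch of multi-task kernel regression with a separable operator-valued kernel K(x, x') = k(x, x')A, where k is a scalar Gaussian kernel and A is a task-similarity matrix. This is one simple instance of the family of algorithms studied above, not the paper's experimental setup; all function names and parameter values are illustrative.

```python
# A minimal sketch (not the paper's exact setting): multi-task kernel ridge
# regression with a separable operator-valued kernel K(x, x') = k(x, x') * A,
# where k is a scalar Gaussian kernel and A is a PSD task-similarity matrix.
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Scalar Gaussian kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_mtkrr(X, Y, A, lam=0.1, gamma=1.0):
    """Solve (G kron A + lam * I) c = vec(Y) for the expansion coefficients.

    X: (n, d) inputs, Y: (n, T) outputs, A: (T, T) task-coupling matrix.
    Returns C with shape (n, T), one coefficient vector per training point.
    """
    n, T = Y.shape
    G = rbf_kernel(X, X, gamma)                    # scalar Gram matrix, (n, n)
    K_big = np.kron(G, A)                          # block operator-valued Gram
    c = np.linalg.solve(K_big + lam * np.eye(n * T), Y.reshape(-1))
    return c.reshape(n, T)

def predict_mtkrr(X_new, X, C, A, gamma=1.0):
    """f(x) = sum_j k(x, x_j) * A @ c_j for each new input x."""
    K_new = rbf_kernel(X_new, X, gamma)            # (m, n)
    return K_new @ C @ A.T                         # (m, T) predictions

# Tiny synthetic example with two related tasks (assumed coupling matrix A).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w = rng.normal(size=3)
Y = np.column_stack([X @ w, X @ w + 0.1 * rng.normal(size=50)])
A = np.array([[1.0, 0.8], [0.8, 1.0]])
C = fit_mtkrr(X, Y, A, lam=0.1, gamma=0.5)
print(predict_mtkrr(X[:3], X, C, A, gamma=0.5))
```

With A equal to the identity the tasks decouple into independent scalar kernel ridge regressions; off-diagonal entries of A share information across tasks.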
Similar Resources
Multi-Task Learning Using Neighborhood Kernels
This paper introduces a new and effective algorithm for learning kernels in a Multi-Task Learning (MTL) setting. Although we consider an MTL scenario here, our approach can easily be applied to standard single-task learning as well. As shown by our empirical results, our algorithm consistently outperforms traditional kernel learning algorithms such as the uniform combination solution, convex c...
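The neighborhood-kernel algorithm itself is not reproduced here; as a point of reference, the sketch below only illustrates the uniform-combination baseline mentioned in the abstract, i.e. averaging several base RBF kernels with equal weights before plugging the result into kernel ridge regression. All names and parameter values are illustrative.

```python
# A rough sketch of the uniform-combination baseline: several base RBF kernels
# with different widths are averaged with equal weights, and the combined
# kernel is used in ordinary kernel ridge regression.
import numpy as np

def rbf(X1, X2, gamma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def uniform_kernel(X1, X2, gammas=(0.1, 1.0, 10.0)):
    """Equal-weight combination K = (1/M) * sum_m K_m of M base kernels."""
    return sum(rbf(X1, X2, g) for g in gammas) / len(gammas)

def krr_fit_predict(X, y, X_new, lam=0.1):
    K = uniform_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return uniform_kernel(X_new, X) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=40)
print(krr_fit_predict(X, y, X[:5]))
```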
The relationship between parents' rating and performance-based measure of executive function in preschool children
Introduction: Both performance-based and rating measures are used to evaluate preschool children’s executive functions. This study aimed to investigate the relationship between performance-based tasks and parental rating of executive functions in preschool children. Method: The present study was a descriptive correlational study. The current study population consisted of all 4 and 5-years-old p...
Comparison between multi-task and single-task oracle risks in kernel ridge regression
In this paper we study multi-task kernel ridge regression and try to understand when the multi-task procedure performs better than the single-task one, in terms of averaged quadratic risk. In order to do so, we compare the risks of the estimators with perfect calibration, the oracle risk. We are able to give explicit settings, favorable to the multi-task procedure, where the multi-task oracle p...
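As a toy illustration of the comparison discussed above (not the oracle-risk analysis of the paper), the sketch below fits vector-valued kernel ridge regression once with an identity task matrix, which reduces to independent single-task fits, and once with an assumed coupling matrix, and reports the averaged quadratic risk of each against noiseless test targets.

```python
# Toy comparison of single-task versus multi-task kernel ridge regression on
# two noisy observations of the same underlying linear function.
import numpy as np

def rbf(X1, X2, gamma=0.5):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(X, Y, X_test, A, lam=0.1):
    """Vector-valued KRR with kernel k(x, x') * A; A = I gives single-task fits."""
    n, T = Y.shape
    G = rbf(X, X)
    c = np.linalg.solve(np.kron(G, A) + lam * np.eye(n * T), Y.reshape(-1))
    return rbf(X_test, X) @ c.reshape(n, T) @ A.T

rng = np.random.default_rng(2)
X, X_test = rng.normal(size=(30, 2)), rng.normal(size=(500, 2))
w = rng.normal(size=2)
# two noisy observations of the same function -> strongly related tasks
Y = np.column_stack([X @ w + 0.3 * rng.normal(size=30),
                     X @ w + 0.3 * rng.normal(size=30)])
Y_test = np.column_stack([X_test @ w, X_test @ w])   # noiseless test targets

I2 = np.eye(2)                             # independent (single-task) fits
A = np.array([[1.0, 0.9], [0.9, 1.0]])     # assumed task-coupling matrix
for name, M in [("single-task", I2), ("multi-task", A)]:
    err = np.mean((fit_predict(X, Y, X_test, M) - Y_test) ** 2)
    print(name, "averaged quadratic risk:", round(err, 4))
```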
The Use of Stability Principle for Kernel Determination in Relevance Vector Machines
The task of RBF kernel selection in Relevance Vector Machines (RVM) is considered. RVM exploits a probabilistic Bayesian learning framework offering a number of advantages over state-of-the-art Support Vector Machines. In particular, RVM effectively avoids determination of the regularization coefficient C via evidence maximization. In the paper we show that RBF kernel selection in a Bayesian framework req...
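The sketch below is only a hedged illustration of the general idea of choosing an RBF kernel width by a stability-style criterion: for each candidate width, it measures how much the fitted predictor changes under leave-one-out deletions of training points. Kernel ridge regression is used as a simple stand-in for the RVM, so this is not the procedure of the paper above.

```python
# Stability-style selection of the RBF width gamma: prefer widths for which
# the fitted predictor changes little when a single training point is removed.
import numpy as np

def rbf(X1, X2, gamma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_predict(X, y, X_eval, gamma, lam=0.1):
    alpha = np.linalg.solve(rbf(X, X, gamma) + lam * np.eye(len(X)), y)
    return rbf(X_eval, X, gamma) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=40)
X_eval = np.linspace(-3, 3, 100)[:, None]

for gamma in (0.01, 0.1, 1.0, 10.0):
    full = krr_predict(X, y, X_eval, gamma)
    # average worst-case perturbation of the predictor under leave-one-out
    score = np.mean([
        np.max(np.abs(full - krr_predict(np.delete(X, i, 0),
                                         np.delete(y, i), X_eval, gamma)))
        for i in range(len(X))])
    print(f"gamma={gamma}: leave-one-out instability = {score:.4f}")
```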
Semi-Supervised Multi-Task Regression
Labeled data are needed for many machine learning applications, but the amount available in some applications is scarce. Semi-supervised learning and multi-task learning are two of the approaches that have been proposed to alleviate this problem. In this paper, we seek to integrate these two approaches for regression applications. We first propose a new supervised multi-task regression method ca...